The task of learning to map an input set onto a permuted sequence of its elements is challenging for neural networks. Set-to-sequence problems occur in natural language processing, computer vision and structure prediction, where interactions between elements of large sets define the optimal output. Models must exhibit relational reasoning, handle varying cardinalities and manage combinatorial complexity. Previous attention-based methods require $n$ layers of their set transformations to explicitly represent $n$-th order relations. Our aim is to enhance their ability to efficiently model higher-order interactions through an additional interdependence component. We propose a novel neural set encoding method called the Set Interdependence Transformer, capable of relating the set's permutation-invariant representation to its elements within sets of any cardinality. We combine it with a permutation learning module into a complete three-part set-to-sequence model and demonstrate its state-of-the-art performance on a number of tasks. These range from combinatorial optimization problems, through permutation learning challenges on both synthetic and established NLP datasets for sentence ordering, to a novel domain of product catalog structure prediction. Additionally, the network's ability to generalize to unseen sequence lengths is investigated, and a comparative empirical analysis of the existing methods' ability to learn higher-order interactions is provided.
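As a rough illustration of the general set-to-sequence pattern described here (a permutation-invariant attention encoder feeding a pointer-style permutation decoder), a minimal sketch is given below; the layer sizes, mean pooling and greedy decoding are assumptions of this sketch, not the authors' Set Interdependence Transformer.

```python
# Minimal sketch: attention-based set encoder + pointer-style permutation decoder.
# Sizes, pooling and greedy decoding are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    def __init__(self, d_in, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(d_in, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                      # x: (batch, n, d_in), order-agnostic
        h = self.encoder(self.embed(x))        # element embeddings, (batch, n, d_model)
        g = h.mean(dim=1)                      # permutation-invariant set summary
        return h, g

class PointerDecoder(nn.Module):
    """Greedy pointer decoder: at each step, attend over the not-yet-chosen elements."""
    def __init__(self, d_model=64):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)

    def forward(self, h, g):
        batch, n, _ = h.shape
        mask = torch.zeros(batch, n, dtype=torch.bool)
        order = []
        q = self.query(g)                      # initial query from the set summary
        for _ in range(n):
            scores = torch.einsum("bd,bnd->bn", q, h)
            scores = scores.masked_fill(mask, float("-inf"))
            idx = scores.argmax(dim=-1)        # greedy choice of the next element
            order.append(idx)
            mask[torch.arange(batch), idx] = True
            q = self.query(h[torch.arange(batch), idx])  # condition on the last pick
        return torch.stack(order, dim=-1)      # predicted permutation indices

enc, dec = SetEncoder(d_in=8), PointerDecoder()
x = torch.randn(2, 5, 8)                       # a batch of two 5-element sets
h, g = enc(x)
print(dec(h, g))                               # indices of shape (2, 5)
```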
Petrov-Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports, inducing expensive dense linear systems. Nevertheless, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently, for a given set of parameters, in an online stage. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE parameter to the matrix of coefficients of optimal test functions (in a basis expansion) associated with that PDE parameter. Next, as a remedy for the second shortcoming, we use low-rank approximation to hierarchically compress the (non-square) matrix of coefficients of optimal test functions. To accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE parameters). When solving the resulting (compressed) Petrov-Galerkin formulation online, we employ a GMRES iterative solver with inexpensive matrix-vector multiplications thanks to the low-rank features of the compressed matrix. Our experiments show that the full online procedure is as fast as the original (unstable) Galerkin approach. In other words, we get the stabilization with hierarchical matrices and neural networks practically for free. We illustrate our findings by means of 2D Eriksson-Johnson and Helmholtz model problems.
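To illustrate why low-rank structure makes the online stage cheap, the sketch below runs GMRES with a compressed matrix-vector product; a single truncated SVD stands in for the hierarchical, block-wise compression, and the problem data are synthetic.

```python
# Minimal sketch: cheap GMRES matrix-vector products through a low-rank factorization.
# A real hierarchical-matrix compression works block-wise; one truncated SVD is used
# here purely to illustrate the cost structure.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, rank = 500, 20
A = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n)) + np.eye(n)

# Offline: factor the (numerically) low-rank part once.
U, s, Vt = np.linalg.svd(A - np.eye(n))
U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]

# Online: each matvec costs O(n * rank) instead of O(n^2).
def matvec(x):
    x = np.asarray(x).reshape(-1)
    return x + U @ (s * (Vt @ x))

Aop = LinearOperator((n, n), matvec=matvec)
b = rng.standard_normal(n)
x, info = gmres(Aop, b)
print(info, np.linalg.norm(A @ x - b))         # 0 and a small residual if converged
```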
The outbreak of the SARS-CoV-2 pandemic has pushed healthcare systems worldwide to their limits, resulting in increased waiting times for diagnosis and required medical assistance. With chest radiographs (CXR) being one of the most common COVID-19 diagnosis methods, many artificial intelligence tools for image-based COVID-19 detection have been developed, often trained on a small number of images from COVID-19-positive patients. Thus, the need for high-quality and well-annotated CXR image databases has increased. This paper introduces the POLCOVID dataset, containing chest X-ray (CXR) images of patients with COVID-19 or other types of pneumonia, as well as healthy individuals, gathered from 15 Polish hospitals. The original radiographs are accompanied by preprocessed images limited to the lung area and the corresponding lung masks obtained with a segmentation model. Moreover, manually created lung masks are provided for a part of the POLCOVID dataset and for four other publicly available CXR image collections. The POLCOVID dataset can help in pneumonia or COVID-19 diagnosis, while the set of matched images and lung masks may serve for the development of lung segmentation solutions.
Continual learning with an increasing number of classes is a challenging task. The difficulty increases when each example is presented exactly once, which requires the model to learn online. Recent methods with classic parameter optimization procedures have been shown to struggle in such setups or to have limitations such as non-differentiable components or memory buffers. For this reason, we present a fully differentiable ensemble method that allows us to efficiently train an ensemble of neural networks in an end-to-end regime. The proposed technique achieves SOTA results without a memory buffer and clearly outperforms the reference methods. The conducted experiments also show a significant performance increase for small ensembles, which demonstrates the capability of obtaining relatively high classification accuracy with a reduced number of classifiers.
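A minimal sketch of an end-to-end differentiable ensemble trained in a single online pass is shown below; logit averaging and the member architecture are assumptions of the sketch rather than the paper's exact construction.

```python
# Minimal sketch: a differentiable ensemble trained online (one pass, no replay buffer).
# Combining members by averaging logits is an assumption of this illustration.
import torch
import torch.nn as nn

class DifferentiableEnsemble(nn.Module):
    def __init__(self, d_in, n_classes, n_members=5, d_hidden=64):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, n_classes))
            for _ in range(n_members)
        )

    def forward(self, x):
        # Average member logits so the whole ensemble receives gradients from one loss.
        return torch.stack([m(x) for m in self.members]).mean(dim=0)

model = DifferentiableEnsemble(d_in=32, n_classes=10)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

# Online regime: every example is seen exactly once.
for step in range(100):
    x = torch.randn(8, 32)                     # incoming mini-batch (toy data)
    y = torch.randint(0, 10, (8,))             # its labels
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```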
Although action recognition systems can achieve top performance when evaluated on in-distribution test points, they are vulnerable to unanticipated distribution shifts in test data. However, test-time adaptation of video action recognition models against common distribution shifts has so far not been demonstrated. We propose to address this problem with an approach tailored to spatio-temporal models that is capable of adapting on a single video sample at each step. It consists of a feature distribution alignment technique that aligns online estimates of test set statistics towards the training statistics. We further enforce prediction consistency over temporally augmented views of the same test video sample. Evaluations on three benchmark action recognition datasets show that our proposed technique is architecture-agnostic and able to significantly boost performance on both the state-of-the-art convolutional architecture TANet and the Video Swin Transformer. Our proposed method demonstrates a substantial performance gain over existing test-time adaptation approaches, both in the evaluation of a single distribution shift and in the challenging case of random distribution shifts. Code will be available at \url{https://github.com/wlin-at/ViTTA}.
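The two test-time objectives, statistics alignment and temporal-augmentation consistency, can be sketched as follows; the choice of feature layer, the L1 distances and the KL-based consistency term are assumptions of this illustration, not necessarily the paper's exact losses.

```python
# Minimal sketch of the two test-time objectives: (i) align online estimates of test
# feature statistics to stored training statistics, (ii) enforce prediction consistency
# over temporally augmented views of the same clip. Hook locations and weights are assumed.
import torch
import torch.nn.functional as F

def alignment_loss(feat, train_mean, train_var):
    # feat: (batch, channels, ...) activations from some chosen layer
    dims = [d for d in range(feat.dim()) if d != 1]
    mean = feat.mean(dim=dims)
    var = feat.var(dim=dims, unbiased=False)
    return (mean - train_mean).abs().mean() + (var - train_var).abs().mean()

def consistency_loss(logits_a, logits_b):
    # Encourage the same prediction for two temporal augmentations of one video.
    return F.kl_div(F.log_softmax(logits_a, dim=-1),
                    F.softmax(logits_b, dim=-1), reduction="batchmean")

# Usage sketch: feat/logits would come from the video model on a single test clip.
feat = torch.randn(1, 16, 8, 7, 7)             # (B, C, T, H, W) toy activations
train_mean, train_var = torch.zeros(16), torch.ones(16)
logits_a, logits_b = torch.randn(1, 400), torch.randn(1, 400)
loss = alignment_loss(feat, train_mean, train_var) + consistency_loss(logits_a, logits_b)
print(float(loss))
```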
Hierarchical decomposition of control is unavoidable in large dynamical systems. In reinforcement learning (RL), it is usually realized with subgoals defined at higher policy levels and achieved at lower policy levels. Reaching these goals can take a substantial amount of time, during which it is not verified whether they are still worth pursuing; due to the randomness of the environment, they may have become obsolete. In this paper, we address this gap in state-of-the-art approaches and propose a method in which the validity of higher-level actions (thus lower-level goals) is constantly verified at the higher level. If these actions, i.e., the lower-level goals, become inadequate, they are replaced by more appropriate ones. This way we combine the advantage of hierarchical RL, which is fast training, with the advantage of flat RL, which is immediate reactivity. We study our approach experimentally on seven benchmark environments.
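A minimal sketch of the described control loop, with the higher level re-checking at every step whether the current subgoal is still the action it would choose, might look like this; the policies and environment below are stubs, not the paper's learned components.

```python
# Minimal sketch: the higher level picks a subgoal, the lower level pursues it, and at
# every step the higher level re-checks the subgoal; if it became obsolete, replace it.
import random

def high_level_policy(state):
    return random.choice(["goal_A", "goal_B"])       # stub for a learned high-level policy

def low_level_policy(state, goal):
    return f"step_towards_{goal}"                    # stub for a learned low-level policy

def env_step(state, action):
    return state + 1, random.random(), state >= 20   # next_state, reward, done (toy env)

state, done = 0, False
goal = high_level_policy(state)
while not done:
    preferred = high_level_policy(state)             # what the higher level wants *now*
    if preferred != goal:                            # subgoal became obsolete -> replace it
        goal = preferred
    action = low_level_policy(state, goal)
    state, _, done = env_step(state, action)
```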
The number of standardized policy documents regarding climate policy and their publication frequency are increasing significantly. The documents are long and tedious to analyze manually, especially for policy experts, lawmakers, and citizens who lack access or domain expertise to utilize data analytics tools. Potential consequences of this situation include reduced citizen governance and involvement in climate policies and an overall surge in analytics costs, rendering them less accessible to the public. In this work, we use a Latent Dirichlet Allocation-based pipeline for the automatic summarization and analysis of the 10-year national energy and climate plans (NECPs) for the period from 2021 to 2030, established by the 27 Member States of the European Union. We focus on analyzing policy framing, i.e., the language used to describe specific issues, to detect essential nuances in the way governments frame their climate policies and pursue their climate goals. The method leverages topic modeling and clustering for the comparative analysis of policy documents across different countries, and it allows for easier integration into potential user-friendly applications supporting the development of theories and processes of climate policy. This would further lead to better citizen governance of and engagement with climate policies, as well as public policy research.
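A minimal sketch of an LDA-based topic pipeline of the kind described here, using scikit-learn on a toy corpus, is shown below; the NECP texts, preprocessing steps and number of topics used in the study are not reproduced.

```python
# Minimal sketch: fit LDA topics on a toy corpus and print the top words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "renewable energy targets and wind capacity expansion",
    "carbon tax and emission trading for heavy industry",
    "building renovation and heating efficiency measures",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)              # per-document topic mixtures

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```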
Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms. Nonetheless, the problem of quantifying entanglement has still not reached a general solution for systems larger than two qubits. In this paper, we investigate the possibility of detecting entanglement with the use of supervised machine learning, namely deep convolutional neural networks. We build a model consisting of convolutional layers that is able to recognize and predict the presence of entanglement for any bipartition of a given multi-qubit system. We demonstrate that training our model on synthetically generated datasets of random density matrices, which either include or exclude the challenging positive-under-partial-transposition entangled states (PPTES), leads to different model accuracies and different abilities to detect such states. Moreover, it is shown that enforcing entanglement-preserving symmetry operations (local operations on a qubit or permutations of qubits) by using a triple Siamese network can significantly increase the model's performance and its ability to generalize to types of states not seen during the training stage. We perform numerical calculations for 3-, 4- and 5-qubit systems, thereby demonstrating the scalability of the proposed approach.
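As an illustration of the input representation and classifier shape described here, the sketch below feeds the real and imaginary parts of a density matrix to a small CNN with one output per bipartition; the architecture sizes are assumptions, and the Siamese training scheme and labeling of synthetic states are omitted.

```python
# Minimal sketch: a density matrix as a two-channel (real, imaginary) image fed to a
# small CNN producing one entanglement logit per bipartition. Training is omitted.
import numpy as np
import torch
import torch.nn as nn

def random_density_matrix(dim, rng):
    g = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

class EntanglementCNN(nn.Module):
    def __init__(self, dim, n_bipartitions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * dim * dim, n_bipartitions),  # one logit per bipartition
        )

    def forward(self, rho):
        x = torch.stack([rho.real, rho.imag], dim=1).float()  # (batch, 2, dim, dim)
        return torch.sigmoid(self.net(x))         # entanglement probability per cut

rng = np.random.default_rng(0)
dim = 2 ** 3                                      # three qubits
rho = torch.from_numpy(np.stack([random_density_matrix(dim, rng) for _ in range(4)]))
model = EntanglementCNN(dim, n_bipartitions=3)    # cuts 1|23, 2|13, 3|12
print(model(rho).shape)                           # torch.Size([4, 3])
```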
We propose three novel pruning techniques to improve the cost and results of inference-aware Differentiable Neural Architecture Search (DNAS). First, we introduce a stochastic bi-path building block for DNAS that can search over the inner hidden dimensions efficiently in terms of memory and computational complexity. Second, we propose an algorithm to prune blocks within the stochastic layers of the supernet during the search. Third, we describe a novel technique for pruning unnecessary stochastic layers during the search. The optimized models resulting from the search are called PruNet and establish a new state-of-the-art Pareto frontier for NVIDIA V100 in terms of inference latency versus ImageNet top-1 image classification accuracy. Using PruNet as a backbone also outperforms GPUNet and EfficientNet on the COCO object detection task with respect to mean average precision (mAP).
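A generic stochastic bi-path building block, two candidate paths of different inner widths weighted by learnable architecture logits via Gumbel-softmax, can be sketched as follows; this only illustrates the idea and is not the paper's exact block or pruning schedule.

```python
# Minimal sketch of a stochastic bi-path building block for DNAS: two candidate paths
# with different inner widths, sampled/weighted by learnable architecture logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiPathBlock(nn.Module):
    def __init__(self, channels, width_small=16, width_large=64):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, w, 1), nn.ReLU(), nn.Conv2d(w, channels, 1))
            for w in (width_small, width_large)
        ])
        self.arch_logits = nn.Parameter(torch.zeros(2))   # searched jointly with weights

    def forward(self, x, tau=1.0):
        gate = F.gumbel_softmax(self.arch_logits, tau=tau)  # stochastic path weights
        return gate[0] * self.paths[0](x) + gate[1] * self.paths[1](x)

    def prune(self):
        # After the search, keep only the more probable path (block-level pruning).
        keep = int(self.arch_logits.argmax())
        return self.paths[keep]

block = BiPathBlock(channels=32)
y = block(torch.randn(2, 32, 14, 14))
print(y.shape, block.prune())
```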
Collaborative robots stand to have an immense impact on human welfare in domestic service applications, while industrial superiority in advanced manufacturing demands dexterous assembly. The outstanding challenge is providing robotic fingertips with a physical design that makes them adept at performing dexterous tasks requiring high-resolution, calibrated shape reconstruction and force sensing. In this work, we present DenseTact 2.0, an optical sensor capable of visualizing the deformed surface of a soft fingertip and using that image in a neural network to perform calibrated shape reconstruction and 6-axis wrench estimation. We demonstrate a sensor accuracy of 0.3633 mm per pixel for shape reconstruction, 0.410 N for forces and 0.387 Nmm for torques, as well as the ability to calibrate new fingers through transfer learning, which achieves comparable performance while training more than four times faster with only 12% of the dataset size.
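A minimal sketch of the sensing pipeline, one network mapping the fingertip image to a per-pixel depth map and a 6-axis wrench, is given below; the encoder/decoder shapes are assumptions, not the DenseTact 2.0 architecture.

```python
# Minimal sketch: one network takes the image of the deformed gel surface and outputs
# a per-pixel depth (shape) map plus a 6-axis wrench estimate. Shapes are assumptions.
import torch
import torch.nn as nn

class TactileNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Sequential(                    # calibrated shape reconstruction
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        self.wrench_head = nn.Sequential(                   # 6-axis force/torque estimate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 6),
        )

    def forward(self, img):
        z = self.encoder(img)
        return self.depth_head(z), self.wrench_head(z)

net = TactileNet()
depth, wrench = net(torch.randn(1, 3, 128, 128))             # one fingertip image
print(depth.shape, wrench.shape)                              # (1, 1, 128, 128), (1, 6)
```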